
The Baldwin Effect in Advancing Generalizability of Physics-Informed Neural Networks (2312.03243v2)

Published 6 Dec 2023 in cs.NE, cs.CE, and cs.LG

Abstract: Physics-informed neural networks (PINNs) are at the forefront of scientific machine learning, making possible the creation of machine intelligence that is cognizant of physical laws and able to accurately simulate them. However, today's PINNs are often trained for a single physics task and require computationally expensive re-training for each new task, even for tasks from similar physics domains. To address this limitation, this paper proposes a pioneering approach to advance the generalizability of PINNs through the framework of Baldwinian evolution. Drawing inspiration from the neurodevelopment of precocial species that have evolved to learn, predict and react quickly to their environment, we envision PINNs that are pre-wired with connection strengths inducing strong biases towards efficient learning of physics. A novel two-stage stochastic programming formulation coupling evolutionary selection pressure (based on proficiency over a distribution of physics tasks) with lifetime learning (to specialize on a sampled subset of those tasks) is proposed to instantiate the Baldwin effect. The evolved Baldwinian-PINNs demonstrate fast and physics-compliant prediction capabilities across a range of empirically challenging problem instances, with more than an order of magnitude improvement in prediction accuracy at a fraction of the computation cost compared to state-of-the-art gradient-based meta-learning methods. For example, when solving the diffusion-reaction equation, a 70x improvement in accuracy was obtained while taking 700x less computational time. This paper thus marks a leap forward in the meta-learning of PINNs as generalizable physics solvers. Sample code is available at https://github.com/chiuph/Baldwinian-PINN.
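The two-stage structure described in the abstract can be illustrated with a toy sketch (not the authors' code; the task family, model size, and hyperparameters here are invented for illustration): an outer evolutionary loop selects network "genotypes" by how well they support fast lifetime learning across a distribution of tasks, while the inner stage specializes cheaply on each sampled task.

```python
# Illustrative sketch of Baldwinian meta-learning. Outer stage: evolve the
# hidden-layer weights of a small random-feature model under selection
# pressure for post-learning proficiency over a task distribution. Inner
# stage ("lifetime learning"): specialize on each sampled task by solving
# only the linear output layer in closed form.
import numpy as np

rng = np.random.default_rng(0)
N_HIDDEN, N_POP, N_GEN, SIGMA = 16, 24, 30, 0.3

def sample_task():
    """Toy task distribution: fit sin(k*x) for a randomly drawn frequency k."""
    k = rng.uniform(1.0, 3.0)
    x = np.linspace(0.0, np.pi, 40)[:, None]
    return x, np.sin(k * x[:, 0])

def lifetime_learning(W, x, y):
    """Inner stage: keep the evolved weights W fixed and fit the output
    layer by least squares; return the post-learning task loss."""
    H = np.tanh(x @ W)                      # hidden features induced by W
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ beta - y) ** 2)

def fitness(W, n_tasks=8):
    """Outer-stage selection pressure: mean post-learning loss over a
    sample of tasks, i.e. how learnable the genotype makes the family."""
    return np.mean([lifetime_learning(W, *sample_task()) for _ in range(n_tasks)])

# A simple hill-climbing evolution strategy over the hidden weights
# (the paper uses CMA-ES; this stand-in keeps the sketch short).
best = rng.normal(size=(1, N_HIDDEN))
best_f = fitness(best)
for _ in range(N_GEN):
    for _ in range(N_POP):
        cand = best + SIGMA * rng.normal(size=best.shape)
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f

print(f"evolved mean post-learning MSE: {best_f:.6f}")
```

The key Baldwinian ingredient is that fitness is measured *after* lifetime learning, so evolution rewards genotypes that are easy to specialize rather than genotypes that already solve any single task.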

Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. 
[2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. 
In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. 
Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. 
[2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. 
Evolution 7(2), 110–117 (1953)
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021). https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023). https://doi.org/10.1016/j.jcp.2023.111912
Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44. Citeseer (2012)
Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. 
Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Cai, S., Mao, Z., Wang, Z., Yin, M., Karniadakis, G.E.: Physics-informed neural networks (pinns) for fluid mechanics: A review. Acta Mechanica Sinica 37(12), 1727–1738 (2021) Cai et al. [2021b] Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021) Huang and Wang [2022] Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems-a review. IEEE Transactions on Power Systems (2022) de Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? 
IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021) Huang and Wang [2022] Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems-a review. IEEE Transactions on Power Systems (2022) de Wolff et al. 
[2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. 
Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems-a review. IEEE Transactions on Power Systems (2022) de Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. 
Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. 
[2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. 
Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. 
[2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. 
[2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. 
[2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. 
[2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. 
References

Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021). https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023). https://doi.org/10.1016/j.jcp.2023.111912
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44. Citeseer (2012)
Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
[2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-Driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Cai, S., Mao, Z., Wang, Z., Yin, M., Karniadakis, G.E.: Physics-informed neural networks (pinns) for fluid mechanics: A review. Acta Mechanica Sinica 37(12), 1727–1738 (2021) Cai et al. [2021b] Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021) Huang and Wang [2022] Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems-a review. IEEE Transactions on Power Systems (2022) de Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
  4. Cai, S., Mao, Z., Wang, Z., Yin, M., Karniadakis, G.E.: Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mechanica Sinica 37(12), 1727–1738 (2021)
  5. Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021)
  6. Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems: A review. IEEE Transactions on Power Systems (2022)
  7. de Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
  8. Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022). https://doi.org/10.1109/TAI.2022.3192362
  9. Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022). https://doi.org/10.1016/j.cma.2022.114909
 10. Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
 11. Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
 12. Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021). https://doi.org/10.1109/MCI.2021.3061854
 13. Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
 14. Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023). https://doi.org/10.1016/j.jcp.2023.111912
 15. Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
 16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
 17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
 18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
 19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
 20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
 21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
 22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
 23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
 24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
 25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
 26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
 27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
 28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
 29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
 30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
 31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
 32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
 33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
 34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
 35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
 36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
 37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
 38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
 39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
 40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
 41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
 42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
 43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
 44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
 45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
 46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
 47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
 48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
 49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
[2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 
37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  5. Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021) Huang and Wang [2022] Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems-a review. IEEE Transactions on Power Systems (2022) de Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. 
Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems-a review. IEEE Transactions on Power Systems (2022) de Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. 
In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. 
Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362
Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. 
Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 
37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. 
IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems: A review. IEEE Transactions on Power Systems (2022)
de Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022). https://doi.org/10.1109/TAI.2022.3192362
Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic-numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022). https://doi.org/10.1016/j.cma.2022.114909
Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021). https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023). https://doi.org/10.1016/j.jcp.2023.111912
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. 
[2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. 
Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. 
[2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. 
IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. 
[2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. 
[2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. 
Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. 
Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362
Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44. Citeseer (2012)
Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. 
[2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. 
[2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. 
[2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. 
Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021). https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023). https://doi.org/10.1016/j.jcp.2023.111912
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022). https://doi.org/10.1109/TAI.2022.3192362
Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022). https://doi.org/10.1016/j.cma.2022.114909
Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021). https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023). https://doi.org/10.1016/j.jcp.2023.111912
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44. Citeseer (2012)
Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO ’18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013)
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  9. Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
  10. Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
  11. Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
  12. Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
  13. Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
  14. Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
  15. Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
  16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
  17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. 
Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  10. Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
  11. Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
  12. Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
  13. Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
  14. Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
  15. Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
  16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
  17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017)
  11. Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
  12. Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
  13. Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
  14. Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
  15. Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
  16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
  17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
  12. Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
  13. Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
  14. Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
  15. Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
  16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
  17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 
37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Machine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320.
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. 
IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
[2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  14. Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
  15. Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44. Citeseer (2012)
  16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
  17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
  15. Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
  16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
  17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. 
Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes.
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
  16. Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
  17. Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
[2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). 
https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. 
IEEE Transactions on Neural Networks 6(4), 911–917 (1995)

Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124

Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129

Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290

Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)

Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)

Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)

Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)

Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)

Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)

Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)

Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)

Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)

Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)

Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)

Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)

Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)

Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)

Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)

Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239

Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3

Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)

Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)

Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)

Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005

Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U

Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  18. Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  19. Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-Driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
  20. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey.
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  21. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
  22. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
  23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes.
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers.
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
23. Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
  24. Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
  26. Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. 
Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  25. Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. 
Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
  27. Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
  28. Hansen, N.: The CMA evolution strategy: a tutorial. arXiv preprint arXiv:1604.00772 (2016)
  29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
  30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
  32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
  34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: a unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  28. Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
29. Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. 
Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
30. Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
32. Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
34. Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  31. Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. 
Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  33. Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3

Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)

Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)

Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)

Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005

Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U

Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2

Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)

Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)

Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)

Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)

Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)

Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)

Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)

Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)

Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. 
CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  35. Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2

Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)

Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)

Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)

Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)

Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)

Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239

Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3

Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)

Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)

Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)

Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005

Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U

Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  36. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids.
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  37. Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
  38. Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  39. Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
  40. Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  41. Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
  42. Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
  43. Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
  44. Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
  45. Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
  46. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  47. Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  48. Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
  49. Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Citations (2)

Summary

  • The paper introduces a Baldwinian meta-learning strategy that pre-wires PINNs for robust physics modeling using evolutionary algorithms.
  • The paper demonstrates notable computational speedups and accuracy improvements on linear and nonlinear ODE/PDE problems via a rapid Moore-Penrose pseudoinverse approach.
  • The paper highlights the potential of B-PINNs to generalize across diverse physical scenarios, paving the way for efficient and adaptable scientific machine learning models.

Introduction

Physics-informed neural networks (PINNs) represent a promising branch of scientific machine learning that integrates physical laws directly into the learning process. By embedding known physical constraints, such as governing differential equations, into the training loss, PINNs can make predictions that adhere to those constraints. While PINNs have been applied across many scientific domains, they generalize poorly to physics scenarios unseen during training, and retraining or fine-tuning for each new scenario is computationally expensive. This limitation has propelled research into methods that allow PINNs to learn new tasks efficiently.
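As a concrete, hypothetical illustration of how a physical constraint enters the loss, consider the toy ODE du/dx + λu = 0 with u(0) = 1. The sketch below is not the paper's implementation: it uses a fixed random-feature network and central finite differences in place of automatic differentiation, purely to show the structure of a physics-informed loss (PDE residual plus boundary-condition mismatch).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=50)     # hidden weights (fixed random features)
b = rng.normal(size=50)     # hidden biases
beta = rng.normal(size=50)  # trainable output-layer weights

def u(x, beta):
    """Network prediction u(x) = tanh(x*W + b) @ beta at points x."""
    return np.tanh(np.outer(x, W) + b) @ beta

def pinn_loss(beta, lam=1.0, n=101, h=1e-5):
    """Physics-informed loss for du/dx + lam*u = 0, u(0) = 1,
    approximating du/dx by central differences instead of autodiff."""
    x = np.linspace(0.0, 1.0, n)
    du = (u(x + h, beta) - u(x - h, beta)) / (2 * h)
    residual = du + lam * u(x, beta)        # PDE residual at collocation points
    bc = u(np.array([0.0]), beta)[0] - 1.0  # boundary-condition mismatch
    return np.mean(residual**2) + bc**2
```

Minimizing this loss over `beta` (here left at its random initialization) would drive the network towards the physics-compliant solution.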

Baldwin Effect and Neural Evolution

The paper proposes a new approach to the meta-learning of PINNs via the Baldwin effect, a concept from evolutionary biology suggesting that abilities acquired through learning can, over generations, become innate through natural selection. Drawing an analogy with precocial species that are born with functional skills thanks to such selective pressures, the work explores pre-wiring a PINN's connection strengths for rapid learning of physical laws. An iterative evolutionary algorithm optimizes the network, yielding PINNs whose connection strengths are innately biased towards accurately modeling physics across varied scenarios.
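The outer evolutionary loop can be sketched as follows. This is a minimal (1, λ)-style evolution strategy of our own devising, not the paper's two-stage stochastic program: each candidate genome (hidden weights and biases) is scored by how well it performs *after* lifetime learning on a sample of tasks (different values of λ in du/dx + λu = 0), which is what instantiates the Baldwin effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_col = 20, 64
x = np.linspace(0.0, 1.0, n_col)

def lifetime_fitness(genome, lam):
    """Inner-loop 'lifetime learning': fit the output weights by least
    squares for du/dx + lam*u = 0, u(0) = 1, then return the residual
    error the learned network still makes."""
    W, b = genome[:n_hidden], genome[n_hidden:]
    phi = np.tanh(np.outer(x, W) + b)          # hidden features
    dphi = (1.0 - phi**2) * W                  # d(features)/dx
    A = np.vstack([dphi + lam * phi,           # PDE residual rows
                   np.tanh(np.outer([0.0], W) + b)])  # boundary row
    y = np.concatenate([np.zeros(n_col), [1.0]])
    beta = np.linalg.pinv(A) @ y               # lifetime learning
    return np.mean((A @ beta - y) ** 2)        # post-learning proficiency

def evolve(generations=30, pop=16, sigma=0.3):
    """(1, pop) evolution strategy: selection acts on post-learning
    proficiency averaged over a sampled distribution of tasks."""
    parent = rng.normal(size=2 * n_hidden)
    for _ in range(generations):
        lams = rng.uniform(0.5, 2.0, size=3)   # sampled physics tasks
        children = parent + sigma * rng.normal(size=(pop, 2 * n_hidden))
        scores = [np.mean([lifetime_fitness(c, l) for l in lams])
                  for c in children]
        parent = children[int(np.argmin(scores))]
    return parent
```

Note that the selection criterion (post-learning fitness) need not be differentiable with respect to the genome, which is precisely why an evolutionary outer loop is a natural fit.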

Computational Advantages

The proposed Baldwinian PINNs (B-PINNs) demonstrated substantial computational benefits. For several linear and nonlinear ODE/PDE problems representative of real-world phenomena, these networks provided fast and accurate predictions on unseen tasks. Lifetime learning reduces to a least-squares problem over the output-layer weights, which can be solved rapidly via the Moore-Penrose pseudoinverse. Compared to recent meta-learning PINNs, the B-PINNs achieved notable computational speedups and accuracy improvements, for example a 70x gain in accuracy at 700x less computation time on the diffusion-reaction equation, signifying substantial advances in modeling dynamical systems and scientific phenomena.
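Because the governing equation in a simple linear case is linear in u, and the network output is linear in the output-layer weights, lifetime learning collapses to a single linear least-squares solve. The sketch below (a random-feature network of our own choosing, not the paper's architecture) illustrates this for du/dx + λu = 0, u(0) = 1, whose exact solution is e^(-λx).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=50)  # fixed hidden weights (the 'evolved' part)
b = rng.normal(size=50)  # fixed hidden biases

def phi(x):
    """Hidden-layer features at points x, shape (n, 50)."""
    return np.tanh(np.outer(x, W) + b)

def dphi(x):
    """Derivative of the features w.r.t. x (chain rule on tanh)."""
    return (1.0 - np.tanh(np.outer(x, W) + b) ** 2) * W

lam = 1.0
x = np.linspace(0.0, 1.0, 101)

# For du/dx + lam*u = 0, u(0) = 1, the residual is linear in the
# output weights beta, so 'lifetime learning' is one linear solve.
A = np.vstack([dphi(x) + lam * phi(x),   # PDE residual rows
               phi(np.array([0.0]))])    # boundary-condition row
y = np.concatenate([np.zeros(len(x)), [1.0]])

beta = np.linalg.pinv(A) @ y             # Moore-Penrose pseudoinverse solve
u_pred = phi(x) @ beta
err = np.max(np.abs(u_pred - np.exp(-lam * x)))
```

No gradient descent is needed at prediction time: specializing the network to a new task costs one pseudoinverse solve, which is the source of the reported speedups.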

Implications for Machine Learning and Physics

The investigation reveals the potential of B-PINNs to serve as a foundation for scientific machine learning models capable of flexibly and reliably predicting a broad array of physical processes. The evolutionary techniques utilized also illustrate the extensive customization possible for neural networks through non-differentiable selection criteria.

Conclusion

The Baldwinian evolution of PINNs explored in this paper opens up promising avenues for creating generalizable and computationally efficient models in scientific machine learning. This novel approach paves the way for quick and accurate predictions across a spectrum of physics tasks, redefining the capabilities and applications of physics-informed machine learning.