Uncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models (2312.05320v4)

Published 8 Dec 2023 in physics.flu-dyn and cs.LG

Abstract: Leveraging neural networks as surrogate models for turbulence simulation is a topic of growing interest. At the same time, embodying the inherent uncertainty of simulations in the predictions of surrogate models remains very challenging. The present study makes a first attempt to use denoising diffusion probabilistic models (DDPMs) to train an uncertainty-aware surrogate model for turbulence simulations. Due to its prevalence, the simulation of flows around airfoils with various shapes, Reynolds numbers, and angles of attack is chosen as the learning objective. Our results show that DDPMs can successfully capture the whole distribution of solutions and, as a consequence, accurately estimate the uncertainty of the simulations. The performance of DDPMs is also compared with varying baselines in the form of Bayesian neural networks and heteroscedastic models. Experiments demonstrate that DDPMs outperform the other methods regarding a variety of accuracy metrics. Besides, it offers the advantage of providing access to the complete distributions of uncertainties rather than providing a set of parameters. As such, it can yield realistic and detailed samples from the distribution of solutions. We also evaluate an emerging generative modeling variant, flow matching, in comparison to regular diffusion models. The results demonstrate that flow matching addresses the problem of slow sampling speed typically associated with diffusion models. As such, it offers a promising new paradigm for uncertainty quantification with generative models.


Summary

  • The paper demonstrates that DDPMs can effectively capture turbulence uncertainty in airfoil simulations, outperforming traditional surrogate models.
  • It shows that DDPMs accurately predict drag coefficient distributions and field standard deviations, offering improved precision over BNNs and heteroscedastic models.
  • The study highlights that while DDPMs entail higher computational costs, their robust uncertainty quantification can significantly benefit aerodynamic optimization tasks.

A Technical Overview of Uncertainty-aware Surrogate Models for Airfoil Flow Simulations Using Denoising Diffusion Probabilistic Models

The paper, "Uncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models," presents a detailed exploration into modeling turbulence simulation uncertainty, employing a novel approach through denoising diffusion probabilistic models (DDPMs). This paper is particularly focused on airfoil flow simulations and compares DDPMs against traditional Bayesian neural networks (BNNs) and heteroscedastic models, revealing the potential and limitations of DDPMs in capturing the complexities of turbulence-related uncertainties.

Choice of Methodology

The authors use DDPMs for surrogate modeling of airfoil flow simulations, an approach that has rarely been explored in this domain. DDPMs, widely recognized for producing higher-quality samples than generative models such as GANs or VAEs, offer the distinct advantage of giving access to the complete distribution of simulated solutions, and therefore represent uncertainty inherently.

In this context, the paper discusses how DDPMs, unlike BNNs or heteroscedastic models, avoid prescribing a parametric distribution for the target variable: they model it through a step-wise denoising process. This makes them better suited to representing the uncertainty of RANS simulations, which BNNs capture only through prior assumptions on the network parameters and heteroscedastic models only under a Gaussian-distribution premise.
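To make the step-wise denoising idea concrete, the following is a minimal, hedged sketch of a conditional DDPM training step in PyTorch. The noise schedule, the field/condition channel layout, and the `TinyDenoiser` placeholder are illustrative assumptions rather than the authors' implementation; the paper conditions its network on the airfoil shape, Reynolds number, and angle of attack, which are represented here only as a stand-in conditioning tensor.

```python
# Minimal sketch (not the authors' code) of conditional DDPM training for a
# flow-field surrogate: add noise to a solution field at a random timestep and
# train the network to predict that noise.
import torch
import torch.nn as nn

T = 500                                        # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_training_step(model, x0, cond):
    """One training step: noise x0 at a random timestep and predict the noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_bar.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward noising
    pred = model(x_t, cond, t)                                # predicted noise
    return nn.functional.mse_loss(pred, noise)

class TinyDenoiser(nn.Module):
    """Stand-in denoiser so the sketch runs end to end; the paper's network is larger."""
    def __init__(self, field_ch=3, cond_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(field_ch + cond_ch + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, field_ch, 3, padding=1),
        )
    def forward(self, x_t, cond, t):
        # Broadcast the timestep to a channel so a plain CNN can condition on it.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, cond, t_map], dim=1))

model = TinyDenoiser()
x0 = torch.randn(4, 3, 64, 64)    # stand-in for RANS solution fields (p, ux, uy)
cond = torch.randn(4, 3, 64, 64)  # stand-in for airfoil mask / Re / AoA channels
loss = ddpm_training_step(model, x0, cond)
loss.backward()
```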

Comparative Analysis

The authors present an extensive analysis of the learning capabilities across different methodologies:

  1. Capturing Distributional Uncertainty: The key differentiator for DDPMs in this paper is their ability to generate plausible sample solutions, whereas both BNNs and heteroscedastic models produce noisier, less reliable samples. BNN samples are biased by the assumed prior over the network parameters, while heteroscedastic models assume a simplistic (Gaussian) data distribution that fails to capture the complexity of turbulent flow fields.
  2. Numerical Results: Numerical experiments show that DDPMs surpass the alternative methods in both single- and multi-parameter setups, particularly in predicting the fields' standard deviations and in handling extrapolation. The improved representation of turbulence uncertainty is evident in the DDPMs' accurate prediction of drag-coefficient distributions, which BNNs and heteroscedastic models capture poorly (a minimal sampling sketch follows this list).
  3. Sampling Efficiency: Despite their stronger modeling capability, DDPMs incur a notably higher computational cost, because many iterative denoising steps are needed to produce a sample.
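As referenced in item 2 above, the sketch below illustrates how an ensemble of reverse-diffusion samples yields pointwise uncertainty estimates: draw several samples for one airfoil condition and take the mean and standard deviation. The schedule constants mirror the training sketch, and `model` and `cond` follow the same assumed interface; this is an illustration, not the authors' API.

```python
# Hedged sketch: ancestral DDPM sampling plus ensemble statistics.
import torch

T = 500                                        # same assumed schedule as the training sketch
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def ddpm_sample(model, cond, n_samples=16, shape=(3, 64, 64)):
    """Draw an ensemble of solution fields for a single conditioning input."""
    x = torch.randn(n_samples, *shape)
    c = cond.expand(n_samples, -1, -1, -1)
    for i in reversed(range(T)):
        t = torch.full((n_samples,), i, dtype=torch.long)
        eps = model(x, c, t)                                         # predicted noise
        a, a_bar = alphas[i], alphas_bar[i]
        x = (x - (1.0 - a) / (1.0 - a_bar).sqrt() * eps) / a.sqrt()  # posterior mean
        if i > 0:
            x = x + betas[i].sqrt() * torch.randn_like(x)            # stochastic reverse step
    return x

# Usage with any trained denoiser (e.g. TinyDenoiser above); `cond` has shape (1, 3, 64, 64):
# samples = ddpm_sample(model, cond)
# mean_field, std_field = samples.mean(dim=0), samples.std(dim=0)
```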

Implications and Future Work

Successfully employing DDPMs in turbulence simulations has implications beyond the present study because of their capacity to represent uncertainty more accurately. As surrogates, DDPM-based models offer improved reliability, which is particularly valuable in iterative engineering tasks such as aerodynamic shape optimization, where quantifying uncertainty is crucial.

Theoretically, the application of DDPMs can be extended to turbulence modeling, where representing unresolved turbulent scales as probabilistic distributions remains an open challenge. Future research could explore acceleration techniques for DDPM sampling (one such route is sketched below) and ways to extend generalization beyond the current limits, for example by varying dataset parameters more broadly.
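As one illustration of the sampling-acceleration direction, the following is a hedged sketch of DDIM-style deterministic sampling, which reuses a trained DDPM denoiser but visits only a strided subset of the timesteps. This is not the paper's method (the paper instead evaluates flow matching to address slow sampling); the schedule and the `model(x, cond, t)` interface are the same assumptions as in the earlier sketches.

```python
# Hedged sketch: DDIM-style sampling with a reduced number of steps (eta = 0).
import torch

T = 500                                        # same assumed schedule as before
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def ddim_sample(model, cond, n_steps=50, n_samples=16, shape=(3, 64, 64)):
    """Deterministic sampling over a strided subset of the timesteps."""
    x = torch.randn(n_samples, *shape)
    c = cond.expand(n_samples, -1, -1, -1)
    steps = torch.linspace(T - 1, 0, n_steps).long()               # e.g. 50 of 500 steps
    for idx, i in enumerate(steps):
        t = torch.full((n_samples,), int(i), dtype=torch.long)
        eps = model(x, c, t)
        a_bar = alphas_bar[i]
        x0_hat = (x - (1.0 - a_bar).sqrt() * eps) / a_bar.sqrt()   # predicted clean field
        a_prev = alphas_bar[steps[idx + 1]] if idx + 1 < n_steps else torch.tensor(1.0)
        x = a_prev.sqrt() * x0_hat + (1.0 - a_prev).sqrt() * eps   # deterministic update
    return x
```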

In conclusion, the paper establishes a firm premise for adopting DDPMs as a beneficial tool in airfoil flow simulations, setting a path for their more extensive utilization in predictive modeling where uncertainty quantification is paramount.
