Denoising diffusion algorithm for inverse design of microstructures with fine-tuned nonlinear material properties (2302.12881v1)

Published 24 Feb 2023 in cs.LG

Abstract: In this paper, we introduce a denoising diffusion algorithm to discover microstructures with nonlinear fine-tuned properties. Denoising diffusion probabilistic models are generative models that use diffusion-based dynamics to gradually denoise images and generate realistic synthetic samples. By learning the reverse of a Markov diffusion process, we design an artificial intelligence to efficiently manipulate the topology of microstructures to generate a massive number of prototypes that exhibit constitutive responses sufficiently close to designated nonlinear constitutive responses. To identify the subset of microstructures with sufficiently precise fine-tuned properties, a convolutional neural network surrogate is trained to replace high-fidelity finite element simulations to filter out prototypes outside the admissible range. The results of this study indicate that the denoising diffusion process is capable of creating microstructures of fine-tuned nonlinear material properties within the latent space of the training data. More importantly, the resulting algorithm can be easily extended to incorporate additional topological and geometric modifications by introducing high-dimensional structures embedded in the latent space. The algorithm is tested on the open-source mechanical MNIST data set. Consequently, this algorithm is not only capable of performing inverse design of nonlinear effective media but also learns the nonlinear structure-property map to quantitatively understand the multiscale interplay among the geometry and topology and their effective macroscopic properties.
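The abstract rests on the two standard ingredients of a denoising diffusion probabilistic model: a closed-form forward process that gradually noises a microstructure image, and a learned reverse Markov process that denoises pure noise back into a realistic sample. The sketch below is a minimal, generic DDPM in PyTorch for 28x28 images (the Mechanical MNIST resolution); it is not the authors' implementation. The `SimpleDenoiser` network, the linear noise schedule, and all hyperparameters are illustrative assumptions, and the paper's property conditioning is omitted.

```python
# Minimal sketch of the standard DDPM forward/reverse process (Ho et al., 2020),
# applied to 2D microstructure images. Architecture, schedule, and hyperparameters
# are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, noise):
    """Forward process: sample x_t ~ q(x_t | x_0) in closed form."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

class SimpleDenoiser(nn.Module):
    """Tiny stand-in for the noise-prediction network (a U-Net in practice)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_t, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand_as(x_t)
        return self.net(torch.cat([x_t, t_map], dim=1))

model = SimpleDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(x0):
    """One DDPM step: predict the noise added at a random timestep.
    x0: batch of microstructure images, shape (B, 1, 28, 28), values in [-1, 1]."""
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    loss = nn.functional.mse_loss(model(x_t, t), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def sample(n=4, size=28):
    """Reverse process: start from pure noise and denoise step by step."""
    x = torch.randn(n, 1, size, size)
    for t in reversed(range(T)):
        t_batch = torch.full((n,), t, dtype=torch.long)
        eps = model(x, t_batch)
        a, ab = alphas[t], alpha_bars[t]
        mean = (x - (1 - a) / (1 - ab).sqrt() * eps) / a.sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x
```

In the pipeline the abstract describes, prototypes drawn this way would then be screened by a trained CNN surrogate that predicts the effective constitutive response in place of a finite element solve, discarding samples whose predicted response falls outside the admissible range; that surrogate is not shown in this sketch.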
