
Deep Learning for Multiscale Damage Analysis via Physics-Informed Recurrent Neural Network (2212.01880v2)

Published 4 Dec 2022 in cs.CE

Abstract: Direct numerical simulation of hierarchical materials via homogenization-based concurrent multiscale models poses critical challenges for 3D large-scale engineering applications, as computing the highly nonlinear and path-dependent material constitutive responses at the lower scale incurs prohibitively high computational costs. In this work, we propose a physics-informed, data-driven deep learning model as an efficient surrogate to emulate the effective responses of heterogeneous microstructures under irreversible elasto-plastic hardening and softening deformation. Our contribution contains several major innovations. First, we propose a novel training scheme that generates arbitrary loading sequences in a sampling space confined by deformation constraints, where the cost of homogenizing microstructural responses per sequence is dramatically reduced via mechanistic reduced-order models. Second, we develop a new sequential learner that incorporates thermodynamically consistent physics constraints by customizing the training loss function and data-flow architecture. We additionally demonstrate the integration of the trained surrogate within a classic multiscale finite element solver. Our numerical experiments indicate that the model achieves a significant accuracy improvement over a purely data-driven emulator and a dramatic efficiency boost over reduced-order models. We believe our data-driven model provides a computationally efficient and mechanics-consistent alternative to classic constitutive laws, beneficial for high-throughput simulations that require material homogenization of irreversible behaviors.
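To make the idea of a "sequential learner with thermodynamically consistent constraints" concrete, the following is a minimal sketch, not the authors' implementation: a GRU maps a macroscopic strain history to homogenized stress, and the training loss adds a penalty that discourages negative incremental dissipation. All layer sizes, the penalty weight, and the synthetic data are illustrative assumptions; the paper's actual architecture, constraints, and reduced-order sampling pipeline differ in detail.

```python
# Hedged sketch of a physics-informed recurrent surrogate for homogenized
# elasto-plastic response (assumed setup, not the paper's code).
import torch
import torch.nn as nn

class StressRNN(nn.Module):
    def __init__(self, n_strain=6, n_stress=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_strain, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_stress)

    def forward(self, strain_seq):
        # strain_seq: (batch, time, 6) strain history in Voigt notation
        h, _ = self.gru(strain_seq)
        return self.head(h)  # predicted stress at every time step

def dissipation_penalty(stress, strain_seq):
    # Incremental work sigma : d(eps); penalize steps where it goes negative,
    # a simple stand-in for a thermodynamic consistency constraint.
    d_eps = strain_seq[:, 1:] - strain_seq[:, :-1]
    work = (stress[:, 1:] * d_eps).sum(dim=-1)  # (batch, time-1)
    return torch.relu(-work).mean()

model = StressRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for homogenized training sequences (strain -> stress).
strain = torch.randn(32, 50, 6) * 1e-3
target_stress = torch.randn(32, 50, 6)

for step in range(200):
    pred = model(strain)
    loss = nn.functional.mse_loss(pred, target_stress) \
           + 1e-2 * dissipation_penalty(pred, strain)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, such a surrogate would be called at each macroscopic integration point in place of the lower-scale homogenization solve, which is how the paper integrates its emulator into a classic multiscale finite element framework.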
