
PhyCRNet: Physics-informed Convolutional-Recurrent Network for Solving Spatiotemporal PDEs (2106.14103v1)

Published 26 Jun 2021 in cs.LG, cs.CL, cs.NA, and math.NA

Abstract: Partial differential equations (PDEs) play a fundamental role in modeling and simulating problems across a wide range of disciplines. Recent advances in deep learning have shown the great potential of physics-informed neural networks (PINNs) to solve PDEs as a basis for data-driven modeling and inverse analysis. However, the majority of existing PINN methods, based on fully-connected NNs, pose intrinsic limitations to low-dimensional spatiotemporal parameterizations. Moreover, since the initial/boundary conditions (I/BCs) are softly imposed via penalty, the solution quality heavily relies on hyperparameter tuning. To this end, we propose the novel physics-informed convolutional-recurrent learning architectures (PhyCRNet and PhyCRNet-s) for solving PDEs without any labeled data. Specifically, an encoder-decoder convolutional long short-term memory network is proposed for low-dimensional spatial feature extraction and temporal evolution learning. The loss function is defined as the aggregated discretized PDE residuals, while the I/BCs are hard-encoded in the network to ensure forcible satisfaction (e.g., periodic boundary padding). The networks are further enhanced by autoregressive and residual connections that explicitly simulate time marching. The performance of our proposed methods has been assessed by solving three nonlinear PDEs (e.g., 2D Burgers' equations, the $\lambda$-$\omega$ and FitzHugh-Nagumo reaction-diffusion equations), and compared against the state-of-the-art baseline algorithms. The numerical results demonstrate the superiority of our proposed methodology in the context of solution accuracy, extrapolability and generalizability.

Authors (5)
  1. Pu Ren (19 papers)
  2. Chengping Rao (10 papers)
  3. Yang Liu (2253 papers)
  4. Jianxun Wang (8 papers)
  5. Hao Sun (383 papers)
Citations (166)

Summary

Physics-informed Convolutional-Recurrent Network for Solving PDEs

The paper presents PhyCRNet and its variant PhyCRNet-s, physics-informed convolutional-recurrent architectures for solving partial differential equations (PDEs) in a spatiotemporal setting. It departs from conventional physics-informed neural networks (PINNs) by leveraging convolutional-recurrent networks to address challenges PINNs face with scalability and with the imposition of initial and boundary conditions (I/BCs). The focus is on using deep learning to tackle the inherent complexity of solving PDEs, particularly those exhibiting sharp gradients or complex morphologies.

Neural Network Design and Methodology

PhyCRNet structures its architecture around convolutional and recurrent units, which inherit feature-extraction advantages from convolutional networks and temporal sequence modeling capabilities from LSTM units. This combination is encapsulated in an encoder-decoder motif, enabling the network to efficiently process spatial information and represent temporal dynamics. The network architecture incorporates essential components:

  1. Encoder-Decoder Module: Facilitates low-dimensional spatial feature extraction and reconstruction using convolutional layers, offering scalability to multi-dimensional PDEs.
  2. ConvLSTM Integration: Enhances temporal evolution learning by capturing dependencies via convolutions in recurrent cells.
  3. Hard Encoding of I/BCs: Enforces initial and boundary conditions strictly in the network (e.g., via periodic boundary padding), removing PINNs' reliance on loss-function penalties and the associated hyperparameter tuning.
  4. Filtering-based Differentiation: Uses fixed convolutional filters to perform numerical differentiation, providing the spatial derivatives needed to evaluate the discretized PDE residuals.
  5. Residual Learning and AR Scheme: Implements a residual connection resembling forward-Euler time stepping, combined with autoregressive prediction to enable robust time marching and limit error accumulation during testing (a minimal sketch of these ingredients follows this list).
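
To make the interplay of these components concrete, the following is a minimal PyTorch-style sketch of a single prediction step, assuming a 64x64 periodic grid and a two-component field. The layer sizes, the single-layer encoder/decoder, the time step `dt`, and the names (`ConvLSTMCell`, `PhyCRNetStep`) are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # One convolution produces all four gates (input, forget, output, cell).
        self.gates = nn.Conv2d(2 * channels, 4 * channels, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class PhyCRNetStep(nn.Module):
    def __init__(self, ch=2, hidden=32, dt=0.01):
        super().__init__()
        self.dt = dt
        # Encoder: strided convolution to a low-dimensional latent field.
        self.encoder = nn.Conv2d(ch, hidden, kernel_size=4, stride=2)
        self.cell = ConvLSTMCell(hidden)
        # Decoder: transposed convolution back to the physical resolution.
        self.decoder = nn.ConvTranspose2d(hidden, ch, kernel_size=4, stride=2, padding=1)

    def forward(self, u, state):
        # Hard-encoded periodic BCs: circular padding instead of a BC penalty term.
        x = F.pad(u, (1, 1, 1, 1), mode='circular')
        z = torch.tanh(self.encoder(x))
        h, state = self.cell(z, state)
        # Residual (autoregressive) connection: u_{k+1} = u_k + dt * f_theta(u_k).
        return u + self.dt * self.decoder(h), state

# Example: one step on a 64x64 two-component field (e.g., the u and v of 2D Burgers').
u = torch.rand(1, 2, 64, 64)
state = (torch.zeros(1, 32, 32, 32), torch.zeros(1, 32, 32, 32))
u_next, state = PhyCRNetStep()(u, state)
```

The sketch only shows how circular padding replaces a boundary-condition penalty and how the residual connection realizes Euler-style time marching; a deeper encoder, stacked ConvLSTM layers, and learned upsampling would be needed to match the network described in the paper.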

PhyCRNet-s is a lighter-weight variant that skips the encoder periodically, further improving computational efficiency while retaining the recurrent temporal evolution needed to extrapolate solutions over long horizons (a rollout sketch follows).
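
The encoder-skipping idea can be illustrated with a short rollout loop. This is an assumption about how the scheduling might look, not the paper's code; `encoder`, `cell`, and `decoder` are placeholder callables, and `skip` is an illustrative re-encoding interval.

```python
import torch

def rollout(u0, encoder, cell, decoder, steps, dt=0.01, skip=4):
    u, state, z = u0, None, None
    trajectory = [u0]
    for k in range(steps):
        if k % skip == 0:              # periodically re-encode the physical field
            z = encoder(u)
        z, state = cell(z, state)      # latent temporal evolution (ConvLSTM-like)
        u = u + dt * decoder(z)        # Euler-style residual time marching
        trajectory.append(u)
    return trajectory

# Toy usage with identity stand-ins, just to show the calling convention.
traj = rollout(torch.zeros(1, 2, 64, 64),
               encoder=lambda u: u,
               cell=lambda z, s: (z, s),
               decoder=lambda z: z,
               steps=8)
```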

Numerical Experimentation and Evaluation

The paper demonstrates the efficacy of PhyCRNet and PhyCRNet-s through experiments on three canonical nonlinear PDEs: the 2D Burgers' equations, the $\lambda$-$\omega$ reaction-diffusion equations, and the FitzHugh-Nagumo equations. With synthetic initial conditions sampled from Gaussian distributions and reference solutions generated by established numerical methods, the experiments show superior solution accuracy and extrapolation capability for PhyCRNet compared with standard PINN approaches.
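
For the 2D Burgers' case, the physics-informed loss can be illustrated by assembling the discretized residual, e.g. $r_u = u_t + u u_x + v u_y - \nu (u_{xx} + u_{yy})$, from fixed finite-difference convolution kernels. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses a square periodic grid, a forward difference in time between consecutive predicted snapshots, and illustrative values for `dx`, `dt`, and the viscosity `nu`.

```python
import torch
import torch.nn.functional as F

dx, dt, nu = 1.0 / 64, 0.01, 0.005   # illustrative grid spacing, time step, viscosity

# Central-difference and Laplacian stencils applied as fixed conv kernels.
ddx = torch.tensor([[0., 0., 0.], [-1., 0., 1.], [0., 0., 0.]]) / (2 * dx)
ddy = ddx.t()
lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]) / dx ** 2

def deriv(f, kernel):
    # Circular padding keeps the stencil consistent with the periodic BCs.
    f = F.pad(f, (1, 1, 1, 1), mode='circular')
    return F.conv2d(f, kernel.reshape(1, 1, 3, 3))

def burgers_residual(uv_now, uv_next):
    # uv_* has shape (batch, 2, H, W); channel 0 is u, channel 1 is v.
    u, v = uv_now[:, :1], uv_now[:, 1:]
    residuals = []
    for k in range(2):
        f = uv_now[:, k:k + 1]
        f_t = (uv_next[:, k:k + 1] - f) / dt            # forward difference in time
        r = f_t + u * deriv(f, ddx) + v * deriv(f, ddy) - nu * deriv(f, lap)
        residuals.append(r)
    # Aggregated discretized residual: the training loss needs no labeled data.
    return torch.cat(residuals, dim=1).pow(2).mean()

# Example: residual between two consecutive predicted snapshots.
loss = burgers_residual(torch.rand(1, 2, 64, 64), torch.rand(1, 2, 64, 64))
```

Because the residual is evaluated directly on the predicted fields, no labeled solution data enters the loss, consistent with the training setup described in the abstract.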

  • Error Propagation: The paper reports consistently low root-mean-square errors across both training and extrapolation phases, highlighting PhyCRNet's capacity to maintain accuracy over long temporal horizons.
  • Extrapolation and Generalization: PhyCRNet demonstrates remarkable robustness in predicting solutions beyond training input conditions, adapting effectively to new initial conditions, a critical advantage over PINNs.

Implications and Future Directions

The proposed architectures are strong contenders for solving spatiotemporal PDEs, with promising potential for broader applications in surrogate modeling and inverse analysis. The outlined methodologies could enhance simulations in fields such as fluid dynamics, materials science, and biological modeling without requiring substantial labeled data, thereby supporting data assimilation tasks in sparse-data environments.

The next phase of exploration involves addressing challenges in irregular domains using graph neural networks, optimizing temporal discretization with advanced schemes such as higher-order Runge-Kutta methods, and exploring dynamic encoding for diverse boundary conditions. The paper lays solid groundwork and invites further scrutiny and iteration within the scientific computing community toward versatile, efficient deep learning-based PDE solvers.