
Convolution-weighting method for the physics-informed neural network: A Primal-Dual Optimization Perspective (2506.19805v2)

Published 24 Jun 2025 in cs.LG

Abstract: Physics-informed neural networks (PINNs) are extensively employed to solve partial differential equations (PDEs) by ensuring that the outputs and gradients of deep learning models adhere to the governing equations. However, constrained by computational limitations, PINNs are typically optimized using a finite set of points, which poses significant challenges in guaranteeing their convergence and accuracy. In this study, we propose a new weighting scheme that adaptively shifts the weights of the loss function from isolated points to their continuous neighborhood regions. The empirical results show that our weighting scheme reduces the relative $L^2$ errors to lower values.


Summary

  • The paper introduces a convolution-weighting approach that uses spatially coherent weights and a primal-dual framework to enhance PINN performance.
  • The method reformulates the PINN loss by dynamically updating weights via convolution operators, achieving lower errors and improved convergence in various PDE benchmarks.
  • The approach effectively captures high-frequency dynamics and handles stiff PDEs, making it beneficial for both forward and inverse problems in scientific computing.

Convolution-Weighting Method for the Physics-Informed Neural Network: A Primal-Dual Optimization Perspective

Introduction

Physics-Informed Neural Networks (PINNs) have gained prominence for solving Partial Differential Equations (PDEs) by integrating physical laws directly into the loss function of neural networks. While PINNs offer a universal framework for encoding PDE constraints, they often encounter challenges with convergence and accuracy because training relies on a finite set of collocation points. This paper introduces a novel convolution-weighting scheme for PINNs (CWP) designed to improve their training efficiency and accuracy.
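As background for the proposed scheme, the standard PINN loss penalizes the point-wise PDE residual at a finite set of collocation points. The sketch below (illustrative only, not the authors' code) computes such a residual for the 1D heat equation $u_t = \kappa u_{xx}$, one of the benchmarks considered later; the PyTorch setup and the `model` interface are assumptions.

```python
# Minimal sketch of a point-wise PINN residual for the 1D heat equation
# u_t = kappa * u_xx (illustrative; not the paper's implementation).
import torch

def heat_residual(model, x, t, kappa=1.0):
    """x, t: leaf tensors of shape (N, 1); model maps (N, 2) -> (N, 1)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - kappa * u_xx  # F[u_hat](x, t): zero for an exact solution

# The vanilla PINN loss is the unweighted sum of |residual|^2 over the
# collocation points -- the discrete approximation the paper's weighting
# scheme is designed to improve.
```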

Problem Formulation and Proposed Method

The core innovation of this work is the development of a convolution-weighting framework. Unlike previous approaches that assign weights to isolated points, this method incorporates spatially coherent weighting by leveraging convolution operations. The convolution-weighting method addresses the mismatch between discrete point-wise training and the continuous nature of PDE systems by introducing spatial correlations in the residuals. The adopted primal-dual optimization perspective further integrates the benefits of both convolutional sampling and adaptive weighting.

Implementation and Algorithm Details

The proposed method reformulates the PINN loss function to account for spatially correlated weights, which are computed using a convolution operator applied to residuals:

$$\mathcal{L}_F(\theta) = \sum_{(\mathbf{x}, t) \in \Omega_F} \lambda_{F}(\mathbf{x}, t)\,\big| \mathcal{F}[\hat{u}(\theta)](\mathbf{x}, t) \big|^2,$$

where the weights $\lambda_{F}(\mathbf{x}, t)$ are dynamically updated during training based on the smoothed residuals. The method also incorporates an adaptive resampling strategy, wherein the sampling density is adjusted according to the importance determined by the residual magnitudes.
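A minimal sketch of this weighting step, assuming collocation points on a regular $(x, t)$ grid: the absolute residuals are smoothed with a small averaging kernel, and the smoothed, normalized field is used as the (detached) weights $\lambda_F$. The kernel size, normalization, and function names here are illustrative choices rather than the authors' implementation.

```python
# Illustrative convolution-based weighting of PDE residuals.
import torch
import torch.nn.functional as F

def convolution_weights(residuals, kernel_size=3):
    """residuals: (H, W) tensor of point-wise PDE residuals on a regular grid."""
    r = residuals.abs().unsqueeze(0).unsqueeze(0)              # (1, 1, H, W)
    kernel = torch.ones(1, 1, kernel_size, kernel_size) / kernel_size**2
    smoothed = F.conv2d(r, kernel, padding=kernel_size // 2)   # local average
    lam = smoothed.squeeze() / smoothed.mean()                 # normalize overall scale
    return lam.detach()                                        # weights carry no gradient

def weighted_pde_loss(residuals, lam):
    # L_F(theta) = sum_{(x,t)} lambda_F(x,t) * |F[u_hat](x,t)|^2
    return (lam * residuals.pow(2)).sum()
```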

A min-max optimization framework is used to balance the trade-off between focusing on critical regions with high residuals and preventing overfitting to specific collocation points. This is formalized as:

$$\min_{\theta}\max_{\lambda}\ \mathcal{L}(\lambda,\theta) = \lambda^{T}\sqrt{W}\, r(\theta) - \frac{1}{2}\lambda^{T}\lambda + \mathcal{L}_2(\theta),$$

where $W$ is a convolutional operator that smooths residuals across neighboring points.
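A sketch of one alternating update implied by this objective, assuming plain gradient ascent on $\lambda$ and gradient descent on $\theta$; the helper names (`residual_fn`, `data_loss_fn`, `smooth_W`) and the dual step size are hypothetical placeholders, not the paper's API.

```python
# Illustrative primal-dual step for
#   min_theta max_lambda  lambda^T sqrt(W) r(theta) - 0.5 * lambda^T lambda + L_2(theta)
import torch

def primal_dual_step(model, lam, residual_fn, data_loss_fn, smooth_W,
                     opt_theta, eta_lambda=0.1):
    r = residual_fn(model)          # point-wise PDE residuals, shape (N,)
    smoothed = smooth_W(r)          # sqrt(W) r(theta): convolution-smoothed residuals

    # Dual ascent on lambda: dL/dlambda = sqrt(W) r(theta) - lambda.
    with torch.no_grad():
        lam += eta_lambda * (smoothed.detach() - lam)

    # Primal descent on theta with the weights held fixed.
    loss = torch.dot(lam.detach(), smoothed) + data_loss_fn(model)
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()
    return lam, float(loss)
```

Note that maximizing over $\lambda$ in closed form gives $\lambda^\star = \sqrt{W}\, r(\theta)$, so the scheme effectively weights each residual by its convolution-smoothed magnitude, consistent with the weighting interpretation above.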

Experimental Evaluation

The convolution-weighting method was evaluated across several benchmarks involving different types of PDEs:

  • 1D Heat Equation: CWP demonstrated superior performance in accurately capturing high-frequency dynamics with reduced computational cost.
  • 2D Klein-Gordon Equation: The method yielded lower relative $L^2$ errors and point-wise errors compared to other adaptive weighting strategies, showcasing its robustness for complex wave-like phenomena.
  • Viscous Burgers Equation: CWP effectively resolved sharp gradients near shock regions, a known challenge for conventional PINNs.
  • Unsteady Cylinder Flow (Navier-Stokes Equations): The method achieved significant improvements in accuracy for high-Reynolds-number flows using sparse observational data.
  • Inverse Problem (Poisson Equation): CWP excelled in reconstructing spatially varying coefficients, underscoring its utility in inverse problems.

    Figure 1: Exact solution of the 1D Heat Equation.

Discussion

The convolution-weighting method for PINNs bridges the gap between discrete collocation-based training and the continuous nature of PDEs by integrating spatial coherence into the residual weighting. By applying convolutional operators, the method enforces continuity of the weight field and promotes adaptive sampling aligned with the dynamics of the underlying physical system.

The proposed technique demonstrates strengths in handling data-scarce scenarios, stiff PDE systems, and complex boundary conditions. It advances the state of the art by reducing relative errors and improving the efficiency and effectiveness of PINNs across a range of problems.

Conclusion

This paper introduces a robust and efficient convolution-weighting approach for enhancing PINN training, particularly beneficial in scenarios with sparse data and highly complex PDEs. CWP sets the stage for future developments in physics-informed learning, offering a promising avenue for solving both forward and inverse problems in scientific computing. Future work should explore further optimizations and extend the method's applicability to even more challenging high-dimensional PDE scenarios.
