
hp-VPINNs: Variational Physics-Informed Neural Networks With Domain Decomposition (2003.05385v1)

Published 11 Mar 2020 in cs.NE, cs.LG, cs.NA, and math.NA

Abstract: We formulate a general framework for hp-variational physics-informed neural networks (hp-VPINNs) based on the nonlinear approximation of shallow and deep neural networks and hp-refinement via domain decomposition and projection onto space of high-order polynomials. The trial space is the space of neural network, which is defined globally over the whole computational domain, while the test space contains the piecewise polynomials. Specifically in this study, the hp-refinement corresponds to a global approximation with local learning algorithm that can efficiently localize the network parameter optimization. We demonstrate the advantages of hp-VPINNs in accuracy and training cost for several numerical examples of function approximation and solving differential equations.

Citations (442)

Summary

  • The paper introduces hp-VPINNs, a variational framework combining domain decomposition and hp-refinement for solving differential equations.
  • The paper leverages high-order polynomial test functions to capture local features, outperforming standard PINNs near steep gradients and singularities.
  • The paper demonstrates robust inverse problem solutions by accurately identifying unknown parameters, highlighting its scalability and precision in complex scenarios.

Overview of hp-VPINNs for Solving Differential Equations

The paper introduces hp-Variational Physics-Informed Neural Networks (hp-VPINNs), a framework for solving differential equations with neural networks, enhanced by domain decomposition and projection onto high-order polynomial spaces. The method combines the approximation power of shallow and deep neural networks with hp-refinement strategies familiar from numerical analysis: domain decomposition (h) and polynomial-order augmentation (p). The trial space is the neural network itself, defined globally over the whole computational domain, whereas the test space consists of piecewise polynomials.
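To make the variational formulation concrete, here is a minimal NumPy sketch (not code from the paper) of the weak-form residual for the 1D Poisson problem -u'' = f on [-1, 1]. The test functions v_k = P_{k+1} - P_{k-1} (differences of Legendre polynomials, which vanish at x = ±1) and the 50-point Gauss quadrature are illustrative choices; in an hp-VPINN these residuals, evaluated with a neural-network trial function, would form the training loss:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre quadrature nodes and weights on [-1, 1]
xq, wq = L.leggauss(50)

# Manufactured solution of -u'' = f (stand-in for the neural-network trial fn)
u  = lambda x: np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)
f  = lambda x: np.pi**2 * np.sin(np.pi * x)

def legendre(k, x):
    """Evaluate the Legendre polynomial P_k at x."""
    c = np.zeros(k + 1); c[k] = 1.0
    return L.legval(x, c)

def dlegendre(k, x):
    """Evaluate the derivative P_k' at x."""
    c = np.zeros(k + 1); c[k] = 1.0
    return L.legval(x, L.legder(c))

def weak_residual(k):
    """r_k = ∫ u' v_k' dx - ∫ f v_k dx with v_k = P_{k+1} - P_{k-1}.
    v_k vanishes at x = ±1, so integration by parts leaves no boundary term."""
    vk  = legendre(k + 1, xq) - legendre(k - 1, xq)
    dvk = dlegendre(k + 1, xq) - dlegendre(k - 1, xq)
    return np.sum(wq * du(xq) * dvk) - np.sum(wq * f(xq) * vk)

# For the exact solution every weak residual is (numerically) zero
residuals = [weak_residual(k) for k in range(1, 6)]
```

During training, the exact `u` above would be replaced by the network and its autodiff derivative, and the loss would be the sum of squared residuals over all test functions.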

Key Methodological Developments

  • Variational Framework: The hp-VPINNs are rooted in the variational form of physics-informed problems, utilizing high-order polynomial test functions. This choice allows the formulation to take advantage of the accuracy of spectral methods while maintaining the flexibility of neural networks.
  • Domain Decomposition: By decomposing the computational domain into subdomains, the hp-VPINNs provide an effective approach for localizing network parameter optimization. This facet of the method enables flexible computational strategies, potentially allowing parallelization for increased computational efficiency.
  • hp-Refinement: The framework supports both h-refinement (refining the subdomain partition) and p-refinement (raising the polynomial degree of the test functions). This hybrid refinement lets the method adapt to problem-specific needs, such as capturing steep gradients or singularities in the solution.
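The decomposition and refinement machinery can be sketched as follows — a minimal NumPy illustration (the partition of [0, 1], the mapped test functions, and the function names are assumptions, not the paper's implementation). Each subdomain contributes a local sum of squared weak residuals, and `n_sub` and `n_test` are the h- and p-refinement knobs:

```python
import numpy as np
from numpy.polynomial import legendre as L

xq, wq = L.leggauss(30)  # reference quadrature on [-1, 1]

def local_weak_loss(u, du, f, a, b, n_test):
    """Sum of squared weak-form residuals of -u'' = f on subdomain [a, b],
    using test functions v_k = P_{k+1} - P_{k-1} mapped from [-1, 1]."""
    h = 0.5 * (b - a)
    x = a + h * (xq + 1.0)            # physical quadrature nodes in [a, b]
    loss = 0.0
    for k in range(1, n_test + 1):
        c1 = np.zeros(k + 2); c1[k + 1] = 1.0   # coefficients of P_{k+1}
        c0 = np.zeros(k);     c0[k - 1] = 1.0   # coefficients of P_{k-1}
        v  = L.legval(xq, c1) - L.legval(xq, c0)
        dv = (L.legval(xq, L.legder(c1)) - L.legval(xq, L.legder(c0))) / h
        # r_k = ∫ u' v_k' dx - ∫ f v_k dx, with dx = h dξ
        r = np.sum(wq * du(x) * dv * h) - np.sum(wq * f(x) * v * h)
        loss += r**2
    return loss

def hp_vpinn_loss(u, du, f, n_sub, n_test):
    """Total variational loss: local losses summed over an h-partition of [0, 1].
    n_sub controls h-refinement, n_test controls p-refinement."""
    edges = np.linspace(0.0, 1.0, n_sub + 1)
    return sum(local_weak_loss(u, du, f, a, b, n_test)
               for a, b in zip(edges[:-1], edges[1:]))

# Example: the exact solution of -u'' = f yields (near-)zero loss on any partition
u  = lambda x: np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)
f  = lambda x: np.pi**2 * np.sin(np.pi * x)
loss = hp_vpinn_loss(u, du, f, n_sub=4, n_test=3)
```

Because each subdomain's residuals involve only quadrature points inside that subdomain, the local losses are independent, which is what makes parameter optimization localizable and parallelizable.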

Numerical Results and Insights

The numerical experiments demonstrate the efficacy of hp-VPINNs across several differential equation settings, including the Poisson equation and advection-diffusion equations in one and two dimensions. Notably, the formulation remains accurate for problems with steep gradients or boundary layers, both of which are challenging for traditional methods.

  • Function Approximation: The inclusion of localized test functions in the subdomain allows the capture of local features, providing higher accuracy near discontinuities.
  • Comparison with PINNs: The hp-VPINNs surpass standard Physics-Informed Neural Networks (PINNs) in accuracy on problems where representing local features such as singularities is crucial, since the localized, high-order test functions capture these features more effectively than pointwise strong-form residuals.
  • Inverse Problems: The framework successfully identifies unknown parameters in differential equations, such as diffusivity coefficients, demonstrating robustness in inverse problem settings.

Implications and Future Directions

The hp-VPINNs represent an intersection of machine learning and classical numerical techniques, opening pathways to exploit their complementary strengths. They offer a novel avenue for hybrid computational frameworks that balance the expressiveness of neural networks and the rigor of numerical methods. Key implications include:

  • Scalability: The ability to localize computations through domain decomposition enhances scalability, making hp-VPINNs potentially suitable for large-scale and high-dimensional problems.
  • Adaptability: The flexibility in adjusting both mesh refinement and polynomial orders suggests adaptability to specific problem demands, improving efficiency without sacrificing accuracy.
  • Integration of Advanced Techniques: Future work can integrate techniques like adaptive mesh refinement or experiment with different neural network architectures to further optimize performance.

The paper establishes a foundational methodology for variational neural-network models in solving complex differential equations, providing a robust alternative to classical and contemporary techniques. Advances such as hp-VPINNs show how machine learning can continue to make substantive contributions to scientific computing.